
    LMC + Scratch: a recipe to construct a mental model of program execution

    Understanding how programs execute is one of the critical activities in the learning journey of a programmer. A novice constructs a mental model of program execution while learning to program. Any misconceptions at this stage lead to the development of a discrepant mental model; if left untreated, learning in advanced subjects like data structures and compiler construction may suffer. One way to prevent this is to carefully and explicitly unveil the details of program execution. We employed the Little Man Computer (LMC) for this purpose. Its interactive visual interface helped students internalise how software interacts with the hardware to achieve the programmer's objective. After spending a few sessions programming the LMC, we moved to Scratch. Scratch is a much higher-level language than the LMC assembly, so while introducing Scratch programming constructs, we mapped them to their LMC equivalents. The strategy helped evade several misconceptions by developing a deep understanding of the program execution model. It also served as a building block for introducing other concepts such as state, abstraction, the need for higher-level languages and the role of compilers. We tried this approach in an Introduction to Computer Science module where most students had zero or very minimal exposure to programming, and we received positive feedback from students and from fellow teachers teaching in subsequent semesters.
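
    For readers unfamiliar with the LMC, the following is a minimal, hypothetical interpreter written in Python (it is not the classroom tool used in the module); it executes a six-mailbox program that reads two numbers and prints their sum, the kind of program that can later be compared with a single Scratch "set variable to (a + b)" block.

        # Minimal Little Man Computer interpreter -- an illustrative sketch only.
        def run_lmc(program, inputs):
            mem = program + [0] * (100 - len(program))    # 100 mailboxes
            acc, pc, out, inputs = 0, 0, [], list(inputs)
            while True:
                instr = mem[pc]
                pc += 1
                op, addr = divmod(instr, 100)
                if instr == 0:                            # HLT: stop
                    return out
                elif op == 1:   acc += mem[addr]          # ADD
                elif op == 2:   acc -= mem[addr]          # SUB
                elif op == 3:   mem[addr] = acc           # STA
                elif op == 5:   acc = mem[addr]           # LDA
                elif op == 6:   pc = addr                 # BRA
                elif instr == 901: acc = inputs.pop(0)    # INP
                elif instr == 902: out.append(acc)        # OUT

        # INP / STA 10 / INP / ADD 10 / OUT / HLT -- the LMC view of adding two inputs.
        print(run_lmc([901, 310, 901, 110, 902, 0], [3, 4]))  # -> [7]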

    On-the-fly simplification of genetic programming models

    The last decade has seen amazing performance improvements in deep learning. However, the black-box nature of this approach makes it difficult to provide explanations of the generated models. In some fields, such as psychology and neuroscience, this limitation in explainability and interpretability is an important issue. Approaches such as genetic programming are well positioned to take the lead in these fields because of their inherently white-box nature. Genetic programming, inspired by the Darwinian theory of evolution, is a population-based search technique capable of exploring a high-dimensional search space intelligently and discovering multiple solutions. However, it is prone to generating very large solutions, a phenomenon often called “bloat”, and bloated solutions are not easily understandable. In this paper, we propose two techniques for simplifying the generated models. Both techniques are tested by generating models for a well-known psychology experiment, and their validity is further tested by applying them to a symbolic regression problem. Several population dynamics are studied to make sure that these techniques do not compromise diversity, an important measure for finding better solutions. The results indicate that the two techniques can be applied both independently and simultaneously, and that they are capable of finding solutions on par with those generated by the standard GP algorithm, but with significantly reduced program size. There was no loss in diversity and no reduction in overall fitness; in fact, in some experiments, the two techniques even improved fitness.
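
    As a rough illustration of what simplification means for GP models (this is not the paper's algorithm), the Python sketch below applies a few algebraic rewrite rules to an expression tree encoded as nested tuples, collapsing redundant subtrees of the kind that contribute to bloat.

        # Rule-based simplification of GP expression trees given as nested tuples,
        # e.g. ('+', 'x', 0) or ('*', ('-', 'x', 'x'), 'y').  Illustrative only.
        def simplify(node):
            if not isinstance(node, tuple):                    # terminal: variable or constant
                return node
            op, left, right = node
            left, right = simplify(left), simplify(right)
            if op == '+' and right == 0: return left           # x + 0 -> x
            if op == '+' and left == 0:  return right          # 0 + x -> x
            if op == '*' and 0 in (left, right): return 0      # x * 0 -> 0
            if op == '*' and right == 1: return left           # x * 1 -> x
            if op == '*' and left == 1:  return right          # 1 * x -> x
            if op == '-' and left == right: return 0           # x - x -> 0
            if isinstance(left, (int, float)) and isinstance(right, (int, float)):
                return {'+': left + right, '-': left - right, '*': left * right}[op]
            return (op, left, right)

        bloated = ('+', ('*', ('-', 'x', 'x'), 'y'), ('*', 'x', 1))
        print(simplify(bloated))   # -> 'x'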

    Simplification of genetic programs: a literature survey

    Genetic programming (GP), a widely used evolutionary computing technique, suffers from bloat: the problem of excessive growth in individuals’ sizes. As a result, its ability to efficiently explore complex search spaces is reduced, the resulting solutions are less robust and generalisable, and models that contain bloat are difficult to understand and explain. This phenomenon is well researched, primarily from the angle of controlling bloat; our focus in this paper is instead to review the literature from an explainability point of view, by looking at how simplification can make GP models more explainable by reducing their sizes. Simplification is a code-editing technique whose primary purpose is to make GP models more explainable; it can also offer bloat control as an additional benefit when implemented and applied with caution. Researchers have proposed several simplification techniques and adopted various strategies to implement them. We organise the literature along multiple axes to identify the relative strengths and weaknesses of simplification techniques and to identify emerging trends and areas for future exploration. We highlight design and integration challenges and propose several avenues for research. One is to consider simplification as a standalone operator, rather than an extension of the standard crossover or mutation operators; its role is then more clearly complementary to other GP operators, and it can be integrated as an optional feature into an existing GP setup. Another proposed avenue is to explore the lack of utilisation of complexity measures in simplification: so far, size is the most discussed measure, with only two pieces of prior work pointing out the benefits of using time as a measure when controlling bloat.
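
    One of the proposed avenues, simplification as a standalone operator, could be integrated into a GP loop roughly as in the hypothetical sketch below (the function names are placeholders, not an existing library's API); simplification is applied with its own probability, independently of crossover and mutation.

        import random

        # Hypothetical generation loop with simplification as an optional,
        # standalone operator.  Assumes lower fitness is better.
        def next_generation(population, fitness, crossover, mutate, simplify,
                            p_simplify=0.2):
            ranked = sorted(population, key=fitness)
            parents = ranked[:max(2, len(ranked) // 2)]      # truncation-style selection
            offspring = []
            while len(offspring) < len(population):
                a, b = random.sample(parents, 2)
                child = mutate(crossover(a, b))
                if random.random() < p_simplify:             # opt-in, complementary to
                    child = simplify(child)                  # the other operators
                offspring.append(child)
            return offspring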

    Computational scientific discovery in psychology

    Scientific discovery is a driving force for progress, involving creative problem-solving processes to further our understanding of the world. Historically, the process of scientific discovery has been intensive and time-consuming; however, advances in computational power and algorithms have provided an efficient route to making new discoveries. Complex tools using artificial intelligence (AI) can efficiently analyse data as well as generate new hypotheses and theories. As AI becomes increasingly prevalent in our daily lives and the services we access, its application to different scientific domains is becoming more widespread. For example, AI has been used for early detection of medical conditions, identifying treatments and vaccines (e.g., against COVID-19), and predicting protein structure. The application of AI in psychological science has also started to become popular. AI can assist in new discoveries both as a tool that gives scientists more freedom to generate new theories and by making creative discoveries autonomously. Conversely, psychological concepts such as heuristics have refined and improved artificial systems. With such powerful systems, however, there are key ethical and practical issues to consider. This review addresses the current and future directions of computational scientific discovery generally, and its applications in psychological science more specifically.

    Genetic Programming for Developing Simple Cognitive Models

    © 2023 The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/
    Frequently in psychology, simple tasks that are designed to tap a particular feature of cognition are used without considering the other mechanisms that might be at play. For example, the delayed-match-to-sample (DMTS) task is often used to examine short-term memory; however, a number of cognitive mechanisms interact to produce the observed behaviour, such as decision-making and attention processes. As these simple tasks form the basis of more complex psychological experiments and theories, it is critical to understand what strategies might be producing the recorded behaviour. The current paper uses the GEMS methodology, a system that generates models of cognition using genetic programming, and applies it to differing DMTS experimental conditions. We investigate the strategies that participants might be using, while looking at similarities and differences in strategy depending on task variations; in this case, changes to the interval between study and recall affected the strategies used by the generated models.

    Heuristic Search of Heuristics

    © 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG. All rights are reserved by the Publisher. This is the accepted manuscript version of a conference paper which has been published in final form at https://doi.org/10.1007/978-3-031-47994-6
    How can we infer the strategies that human participants adopt to carry out a task? One possibility, which we present and discuss here, is to develop a large number of strategies that participants could have adopted, given a cognitive architecture and a set of possible operations. Subsequently, the (often many) strategies that best explain a dataset of interest are highlighted. To generate and select candidate strategies, we use genetic programming, a heuristic search method inspired by evolutionary principles. Specifically, combinations of cognitive operators are evolved and their performance is compared against human participants' performance on a specific task. We apply this methodology to a typical decision-making task, in which human participants were asked to select the brighter of two stimuli. We discover several understandable, psychologically plausible strategies that offer explanations of participants' performance. The strengths, applications and challenges of this methodology are discussed.
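
    To give an intuition of the search space (the operator set and timings below are illustrative assumptions, not the ones used in the paper), a candidate strategy can be written as an ordered list of primitive cognitive operators whose execution yields both a choice and a simulated response time.

        import random

        # Toy strategy representation for a two-stimulus brightness task.
        OPERATORS = {
            'attend_left':  lambda stim, mem: mem.__setitem__('left', stim['left']),
            'attend_right': lambda stim, mem: mem.__setitem__('right', stim['right']),
            'compare':      lambda stim, mem: mem.__setitem__(
                'choice', 'left' if mem.get('left', 0) >= mem.get('right', 0) else 'right'),
            'guess':        lambda stim, mem: mem.__setitem__(
                'choice', random.choice(['left', 'right'])),
        }

        def run_strategy(strategy, stimulus, time_per_op=0.15):
            """Run one trial; return (choice, simulated reaction time in seconds)."""
            memory = {}
            for op in strategy:
                OPERATORS[op](stimulus, memory)
            return memory.get('choice', 'left'), time_per_op * len(strategy)

        # A careful strategy inspects both stimuli before deciding; a fast one guesses.
        print(run_strategy(['attend_left', 'attend_right', 'compare'],
                           {'left': 0.8, 'right': 0.3}))   # ('left', ~0.45)
        print(run_strategy(['guess'], {'left': 0.8, 'right': 0.3}))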

    Evolving Understandable Cognitive Models

    © 2022 The Author(s), published by the Applied Cognitive Science Lab, Penn State. This is the accepted manuscript version of a conference paper which has been published in final form at http://www.frankritter.com/papers/ICCM2022Proceedings.pdf
    Cognitive models for explaining and predicting human performance in experimental settings are often challenging to develop and verify. We describe a process to automatically generate the programs for cognitive models from a user-supplied specification, using genetic programming (GP). We first construct a suitable fitness function, taking into account observed error and reaction times. We then introduce post-processing techniques to transform the large number of candidate models produced by GP into a smaller set of models, whose diversity can be depicted graphically and which can be individually studied through pseudo-code. These techniques are demonstrated on a typical neuroscientific task, the Delayed Match to Sample task, with the final set of symbolic models separated into two types, each employing a different attentional strategy.
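
    The "suitable fitness function" mentioned above could, in spirit, combine the two behavioural measures as in the sketch below; the equal weighting and the normalisation of reaction time are illustrative assumptions, not the paper's definition.

        # Fitness penalising a candidate model for deviating from human data on
        # both error rate and mean reaction time (lower is better).
        def fitness(model_error_rate, model_mean_rt,
                    human_error_rate, human_mean_rt, rt_weight=1.0):
            error_term = abs(model_error_rate - human_error_rate)
            rt_term = abs(model_mean_rt - human_mean_rt) / max(human_mean_rt, 1e-9)
            return error_term + rt_weight * rt_term

        # A model 5 points more accurate but 100 ms slower than participants:
        print(fitness(0.10, 0.60, 0.15, 0.50))   # -> 0.25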

    Smoking status and mortality outcomes following percutaneous coronary intervention

    Objective: The aim of this study was to assess the impact of smoking on short-term (30-day) and intermediate-term (30-day to 6-month) mortality following percutaneous coronary intervention (PCI). Background: Evidence on the effect of smoking on mortality post-PCI is lacking in the modern PCI era. Methods: This was a retrospective analysis of prospectively collected data comparing short- and intermediate-term mortality amongst current smokers, ex-smokers and never-smokers. Results: The study cohort consisted of 12,656 patients: never-smokers (n = 4288), ex-smokers (n = 4806) and current smokers (n = 3562). The mean age (±standard deviation) was 57 (±11) years in current smokers, compared with 67 (±11) in ex-smokers and 67 (±12) in never-smokers; p < 0.0001. PCI was performed for acute coronary syndrome (ACS) in 84.1% of current smokers, 57% of ex-smokers and 62.9% of never-smokers; p < 0.0001. In a logistic regression model, the adjusted odds ratios (95% confidence intervals (CIs)) for 30-day mortality were 1.60 (1.10–2.32) in current smokers and 0.98 (0.70–1.38) in ex-smokers, compared with never-smokers. In a Cox proportional hazards model, the adjusted hazard ratios (95% CI) for mortality between 30 days and 6 months were 1.03 (0.65–1.65) in current smokers and 1.19 (0.84–1.67) in ex-smokers, compared with never-smokers. Conclusion: This large observational study of non-selected patients demonstrates that ex-smokers and never-smokers are of similar age at first presentation for PCI and that there is no short- or intermediate-term mortality difference between them following PCI. Current smokers undergo PCI at a younger age, more often for ACS, and have higher short-term mortality. These findings underscore the public health message on the benefits of smoking cessation and the harmful effects of smoking.
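
    For readers curious about the mechanics of such an adjusted analysis, the hypothetical Python sketch below fits a logistic regression for 30-day mortality with never-smokers as the reference category; the column names, covariates and simulated data are invented, and the abstract does not state which software or covariate set the study actually used.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated stand-in data; the real study adjusted for many more covariates.
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            'died_30d': rng.binomial(1, 0.03, 5000),
            'smoking':  rng.choice(['never', 'ex', 'current'], 5000),
            'age':      rng.normal(64, 11, 5000),
            'acs':      rng.binomial(1, 0.65, 5000),
        })

        # Adjusted odds ratios for 30-day mortality, never-smokers as reference.
        model = smf.logit("died_30d ~ C(smoking, Treatment('never')) + age + acs",
                          data=df).fit(disp=False)
        print(pd.concat([np.exp(model.params).rename('OR'),
                         np.exp(model.conf_int())], axis=1))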

    Trust in cognitive models: understandability and computational reliabilism

    The realm of knowledge production, once considered a solely human endeavour, has been transformed by the rising prominence of artificial intelligence. AI not only generates new forms of knowledge but also plays a substantial role in scientific discovery. This development raises a fundamental question: can we trust knowledge generated by AI systems? Cognitive modelling, a field at the intersection of psychology and computer science that aims to comprehend human behaviour under various experimental conditions, underscores the importance of trust. To address this concern, we identify understandability and computational reliabilism as two essential aspects of trustworthiness in cognitive modelling. This paper delves into both dimensions of trust, taking as a case study a system for semi-automatically generating cognitive models. These models are evolved interactively as computer programs using genetic programming. The choice of genetic programming, coupled with simplification algorithms, aims to create understandable cognitive models. To discuss reliability, we adopt computational reliabilism and demonstrate how our test-driven software development methodology instils reliability in the model generation process and in the models themselves.

    Meta-programmed algorithmic skeletons: implementations, performance and semantics

    Structured parallelism approaches are a trade-off between automatic parallelisation and concurrent and distributed programming with tools such as Pthreads and MPI. Skeletal parallelism is one of these structured approaches. An algorithmic skeleton can be seen as a higher-order function that captures a pattern of a parallel algorithm, such as a pipeline or a parallel reduction. Often the sequential semantics of a skeleton is quite simple and corresponds to the usual semantics of similar higher-order functions in functional programming languages. The user constructs a parallel program by combining calls to the available skeletons. When designing a parallel program, parallel performance is of course important, so it is very useful for the programmer to be able to rely on a simple yet realistic parallel performance model; Bulk Synchronous Parallelism (BSP) offers such a model. As parallelism can now be found everywhere, from smartphones to supercomputers, it becomes critical for parallel programming models to support proofs of correctness of the programs developed with them. The outcome of this work is the Orléans Skeleton Library, or OSL. OSL provides a set of data-parallel skeletons which follow the BSP model of parallel computation. OSL is a library for C++, currently implemented on top of MPI and using advanced C++ techniques to offer good efficiency. Because OSL is based on the BSP performance model, it is possible not only to predict the performance of an application but also to provide portability of performance. The programming model of OSL is formalised using a big-step semantics in the Coq proof assistant, and based on this formal model, the correctness of an OSL example is proved.
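
    OSL itself is a C++ library built on MPI; purely to illustrate the point that a skeleton's sequential semantics matches familiar higher-order functions, here is a toy Python rendering of two data-parallel skeletons and their composition (this is not OSL's API).

        from functools import reduce

        # Toy sequential semantics of two data-parallel skeletons.  In OSL these
        # are C++ templates over distributed arrays; here they are plain
        # higher-order functions.
        def skel_map(f, xs):
            return [f(x) for x in xs]

        def skel_reduce(op, xs, neutral):
            return reduce(op, xs, neutral)

        # Composing skeletons: a dot product as a map followed by a reduction.
        def dot(xs, ys):
            products = skel_map(lambda p: p[0] * p[1], list(zip(xs, ys)))
            return skel_reduce(lambda a, b: a + b, products, 0)

        print(dot([1, 2, 3], [4, 5, 6]))   # -> 32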